E2 201: Information Theory (2015), Solution to Homework 6
Prepared by: Himanshu Tyagi

Author

  • Himanshu Tyagi
Abstract

(b) Let $P_X(1) = p = 1 - P_X(0)$. Then $I(X \wedge Y) = H(Y) - H(Y \mid X) = h(p/2) - p$. Differentiation yields that $I(X \wedge Y)$ is maximized by $p = \frac{2e^{-2}}{1+e^{-2}}$. Hence, the maximizing input pmf on $\{0,1\}$ is $\left(\frac{1-e^{-2}}{1+e^{-2}},\ \frac{2e^{-2}}{1+e^{-2}}\right)$ and $C = h\!\left(\frac{e^{-2}}{1+e^{-2}}\right) - \frac{2e^{-2}}{1+e^{-2}}$ bits/channel use.

*Q2, 3, 4, 6, 7 and their solutions are from the Information Theory course taught by Prof. Prakash Narayan at the University of Maryland, College Park.
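As a quick numerical sanity check (not part of the original solution), the short Python sketch below maximizes $h(p/2) - p$ over a grid. It assumes the binary entropy $h$ is measured in nats, which is what the $e$-based optimizer above suggests.

import math

def h(q):
    # Binary entropy in nats; natural logs are assumed because the
    # closed-form optimizer above involves e^{-2}.
    return -q * math.log(q) - (1 - q) * math.log(1 - q)

def mutual_information(p):
    # I(X ^ Y) = h(p/2) - p, as in part (b).
    return h(p / 2) - p

# Grid search over the input probability p = P_X(1).
grid = [i / 100000 for i in range(1, 100000)]
p_star = max(grid, key=mutual_information)
closed_form = 2 * math.exp(-2) / (1 + math.exp(-2))

print(p_star)                      # ~0.23841, to the grid resolution
print(closed_form)                 # ~0.23841
print(mutual_information(p_star))  # capacity ~0.1269 nats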


Similar resources

Universal Source Codes. Lecturer: Himanshu Tyagi. Scribe: Sandip Sinha

• Huffman code (optimal prefix-free code)
• Shannon-Fano code
• Shannon-Fano-Elias code
• Arithmetic code (can handle a sequence of symbols)

In general, the first three codes do not achieve the optimal rate H(X), and there are no immediate extensions of these codes to rate-optimal codes for a sequence of symbols. On the other hand, arithmetic coding is rate-optimal. However, all these schemes a...
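To make the first item concrete, here is a minimal Huffman construction in Python, an illustrative sketch of the standard algorithm rather than code from the lecture notes; the function name and toy distribution are mine.

import heapq

def huffman_code(freqs):
    # Build a binary Huffman code for a {symbol: probability} dict.
    # Heap entries are (weight, tiebreak, {symbol: codeword-so-far});
    # the integer tiebreak keeps dicts from ever being compared.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'} (up to 0/1 relabeling)

For this dyadic example the codeword lengths equal $-\log_2$ of the probabilities, so the expected length meets H(X) = 1.75 bits exactly; for non-dyadic sources Huffman coding is optimal among prefix-free symbol codes but can exceed H(X) by up to one bit per symbol.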


Information Theoretic Cryptography for Information Theorists (Notes for a tutorial at ISIT 2017)

Randomness extraction refers to generating almost uniform bits as a function of a given random variable X. This will play a central role in all our applications, in particular, in the privacy amplification step of secret key agreement. The form of functions that can enable randomness extraction depends on the underlying class of distributions of X for which we want the extraction to work. We be...
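A common concrete choice, consistent with the description above though not taken from these notes, is seeded extraction with a 2-universal hash family. The Python sketch below (toy sizes and names of my choosing) hashes the source bits through a random linear map over GF(2), the family used in typical leftover-hash-lemma arguments.

import secrets

def extract(x_bits, seed_rows):
    # Multiply the source bit-vector by a random 0/1 matrix over GF(2).
    # Random linear maps form a 2-universal family, so the output is
    # close to uniform once its length is sufficiently below the
    # min-entropy of X (leftover hash lemma).
    return [sum(r * x for r, x in zip(row, x_bits)) % 2 for row in seed_rows]

n, m = 16, 4                                            # toy sizes
seed = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]
x = [secrets.randbelow(2) for _ in range(n)]            # stand-in source bits
print(extract(x, seed))                                 # m nearly uniform bits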


E6712: Homework 1 Solution

(a) Evaluate analytically the error probability performance of an optimal receiver for N = 1 and m ∈ {1, 2, 3, 4, 5, 6}, and for N = 4 and m ∈ {1, 2, 3, 4, 5, 6}. Plot the error probability vs. m on a single graph for both N = 1 and N = 4, using a logarithmic scale for the error-probability axis. (b) Write a computer program (see the note below) to evaluate by simulation the error probabilit...
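Since the excerpt cuts off before specifying the signal model, the following Python skeleton is only a hedged illustration of the Monte Carlo structure part (b) asks for: it uses a placeholder m-ary amplitude constellation and minimum-distance receiver, which should be replaced by the constellation and optimal decision rule defined in the assignment (all names and parameters here are mine).

import math, random

def simulate_error_prob(m, N, trials=20000, snr=1.0):
    # Placeholder model: symbols {0, ..., m-1} observed N times in
    # Gaussian noise; minimum-distance decoding is ML for this model.
    errors = 0
    for _ in range(trials):
        sym = random.randrange(m)
        obs = [sym + random.gauss(0.0, 1.0 / math.sqrt(snr)) for _ in range(N)]
        est = min(range(m), key=lambda s: sum((y - s) ** 2 for y in obs))
        errors += (est != sym)
    return errors / trials

for N in (1, 4):
    print(N, [simulate_error_prob(m, N) for m in range(1, 7)])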


General Sample Complexity Lower Bounds for Parametric Estimation

In the previous lecture, we introduced the notion of minimax error/risk and looked at how it can be lower bounded via sample complexity for a well-studied case: estimating the mean of a Gaussian distribution with known variance. In this lecture, we introduce techniques which work for more general parametric classes, using which the sample complexity lower bounds derived in the previous lecture ...
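One standard technique of this kind, stated here for concreteness rather than quoted from the excerpt, is Le Cam's two-point method. For any two parameters $\theta_0, \theta_1$ and squared loss,

\[
\inf_{\hat\theta}\;\max_{i \in \{0,1\}} \mathbb{E}_{\theta_i}\big[(\hat\theta - \theta_i)^2\big] \;\ge\; \frac{(\theta_1 - \theta_0)^2}{8}\Big(1 - \mathrm{TV}\big(P_{\theta_0}^{\otimes n}, P_{\theta_1}^{\otimes n}\big)\Big),
\]

since any estimator that lands within $(\theta_1 - \theta_0)/2$ of the wrong parameter induces a failed binary hypothesis test. For Gaussian mean estimation with known variance $\sigma^2$, Pinsker's inequality bounds the total variation by $\sqrt{n}\,|\theta_1 - \theta_0|/(2\sigma)$, and choosing $|\theta_1 - \theta_0| = \sigma/\sqrt{n}$ yields a minimax lower bound of order $\sigma^2/n$.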





Published: 2015